20 results for patient outcomes

at Duke University


Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: Evidence is lacking to inform providers' and patients' decisions about many common treatment strategies for patients with end-stage renal disease (ESRD). METHODS/DESIGN: The DEcIDE Patient Outcomes in ESRD Study is funded by the United States (US) Agency for Healthcare Research and Quality to study the comparative effectiveness of: 1) antihypertensive therapies, 2) early versus later initiation of dialysis, and 3) intravenous iron therapies on clinical outcomes in patients with ESRD. Ongoing studies utilize four existing, nationally representative cohorts of patients with ESRD: (1) the Choices for Healthy Outcomes in Caring for ESRD study (1,041 incident dialysis patients recruited from October 1995 to June 1999, with complete outcome ascertainment through 2009), (2) the Dialysis Clinic Inc cohort (45,124 incident dialysis patients initiating and receiving their care from 2003 to 2010, with complete outcome ascertainment through 2010), (3) the United States Renal Data System (333,308 incident dialysis patients from 2006 to 2009, with complete outcome ascertainment through 2010), and (4) the Cleveland Clinic Foundation Chronic Kidney Disease Registry (53,399 patients with chronic kidney disease, with outcome ascertainment from 2005 through 2009). We ascertain patient-reported outcomes (i.e., health-related quality of life), morbidity, and mortality using clinical and administrative data and data obtained from national death indices. We use advanced statistical methods (e.g., propensity scoring and marginal structural modeling) to account for potential biases of our study designs. All data are de-identified for analyses. The conduct of studies and dissemination of findings are guided by input from stakeholders in the ESRD community. DISCUSSION: The DEcIDE Patient Outcomes in ESRD Study will provide needed evidence regarding the effectiveness of common treatments employed for dialysis patients. Carefully planned dissemination strategies to the ESRD community will enhance the studies' impact on clinical care and patient outcomes.
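The propensity-score adjustment this abstract mentions can be illustrated with a minimal sketch of inverse-probability-of-treatment weighting (IPTW), one common propensity-score technique. All data and variable names below are invented for illustration, not drawn from the DEcIDE study:

```python
# Minimal IPTW sketch (illustrative data only). Each patient has a
# treatment indicator, an estimated propensity score e(x) = P(treated |
# covariates), and a binary outcome (e.g., 1 = event during follow-up).

def iptw_weights(treated, propensity):
    """Weight = 1/e(x) for treated patients, 1/(1 - e(x)) for controls."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical cohort.
treated    = [1, 1, 0, 0, 1, 0]
propensity = [0.8, 0.6, 0.3, 0.4, 0.7, 0.2]
outcome    = [0, 1, 1, 1, 0, 0]

w = iptw_weights(treated, propensity)

# Weighted outcome means in each arm estimate the counterfactual risks
# had everyone (or no one) received the treatment.
risk_treated = weighted_mean(
    [y for y, t in zip(outcome, treated) if t],
    [wi for wi, t in zip(w, treated) if t])
risk_control = weighted_mean(
    [y for y, t in zip(outcome, treated) if not t],
    [wi for wi, t in zip(w, treated) if not t])

print(f"risk difference: {risk_treated - risk_control:+.3f}")
```

Weighting each patient by the inverse probability of the treatment actually received creates a pseudo-population in which measured covariates no longer predict treatment, which is the bias-reduction idea behind the study's propensity-score methods.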

Relevance: 80.00%

Publisher:

Abstract:

Recent investigation has identified association of IL-12p40 blood levels with melanoma recurrence and patient survival. No studies have investigated associations of single-nucleotide polymorphisms (SNPs) with melanoma patient IL-12p40 blood levels or their potential contributions to melanoma susceptibility or patient outcome. In the current study, 818,237 SNPs were available for 1,804 melanoma cases and 1,026 controls. IL-12p40 blood levels were assessed among 573 cases (discovery), 249 cases (case validation), and 299 controls (control validation). SNPs were evaluated for association with log[IL-12p40] levels in the discovery data set and replicated in two validation data sets, and significant SNPs were assessed for association with melanoma susceptibility and patient outcomes. The most significant SNP associated with log[IL-12p40] was in the IL-12B gene region (rs6897260, combined P = 9.26 × 10⁻³⁸); this single variant explained 13.1% of variability in log[IL-12p40]. The most significant SNP in EBF1 was rs6895454 (combined P = 2.24 × 10⁻⁹). A marker in IL12B was associated with melanoma susceptibility (rs3213119, multivariate P = 0.0499; OR = 1.50, 95% CI 1.00-2.24), whereas a marker in EBF1 was associated with melanoma-specific survival in advanced-stage patients (rs10515789, multivariate P = 0.02; HR = 1.93, 95% CI 1.11-3.35). Both EBF1 and IL12B strongly regulate IL-12p40 blood levels, and IL-12p40 polymorphisms may contribute to melanoma susceptibility and influence patient outcome.

Relevance: 70.00%

Publisher:

Abstract:

BACKGROUND: Coronary artery bypass grafting (CABG) is often used to treat patients with significant coronary heart disease (CHD). To date, multiple longitudinal and cross-sectional studies have examined the association between depression and CABG outcomes. Although this relationship is well established, the mechanism underlying it remains unclear. The purpose of this study was twofold. First, we compared three markers of autonomic nervous system (ANS) function in four groups of patients: 1) patients with coronary heart disease and depression (CHD/Dep), 2) patients without CHD but with depression (NonCHD/Dep), 3) patients with CHD but without depression (CHD/NonDep), and 4) patients without CHD or depression (NonCHD/NonDep). Second, we investigated the impact of depression and autonomic nervous system activity on CABG outcomes. METHODS: Patients were screened against the study's inclusion and exclusion criteria. ANS function (i.e., heart rate, heart rate variability, and plasma norepinephrine levels) was measured. Chi-square tests and one-way analysis of variance were performed to evaluate group differences across demographic variables, medical variables, and indicators of ANS function. Logistic regression and multiple regression analyses were used to assess the impact of depression and autonomic nervous system activity on CABG outcomes. RESULTS: The results provide some support for the hypothesis that depressed patients with CHD have greater ANS dysregulation than those with only CHD or only depression. Furthermore, independent predictors of in-hospital length of stay and non-routine discharge included having a diagnosis of depression and CHD, elevated heart rate, and low heart rate variability. CONCLUSIONS: The current study presents evidence supporting the hypothesis that ANS dysregulation may be one of the underlying mechanisms linking depression to CABG surgery outcomes. Thus, future studies should focus on developing and testing interventions that target ANS dysregulation, which may lead to improved patient outcomes.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND: Injuries represent a significant and growing public health concern in the developing world, yet their impact on patients and the emergency health-care system in the countries of East Africa has received limited attention. This study evaluates the magnitude and scope of injury-related disorders in the population presenting to a referral hospital emergency department in northern Tanzania. METHODS: A retrospective chart review of patients presenting to the emergency department at Kilimanjaro Christian Medical Centre was performed. A standardized data collection form was used for data abstraction from the emergency department logbook and the complete medical record for all injured patients. Patient demographics, mechanism of injury, and injury location, type, and outcomes were recorded. RESULTS: A total of 10,622 patients presented to the emergency department for evaluation and treatment during the 7-month study period; 1,224 (11.5%) had injuries. Males and individuals aged 15 to 44 years were most frequently injured, representing 73.4% and 57.8% of the injured, respectively. Road traffic injuries were the most common mechanism, accounting for 43.9% of injuries. The head (36.5%) and extremities (59.5%) were the most common locations of injury. The majority of injured patients (59.3%) were admitted from the emergency department to the hospital wards, and 5.6% required admission to an intensive care unit. Death occurred in 5.4% of injured patients. CONCLUSIONS: These data give a more detailed and robust picture of patient demographics, mechanisms of injury, types of injury, and patient outcomes than has previously been available from similar resource-limited settings.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND: There have been major changes in the management of anemia in US hemodialysis patients in recent years. We sought to determine the influence of clinical trial results, safety regulations, and changes in reimbursement policy on practice. METHODS: We examined indicators of anemia management among incident and prevalent hemodialysis patients from a medium-sized dialysis provider over three time periods: (1) 2004 to 2006, (2) 2007 to 2009, and (3) 2010. Trends across the three time periods were compared using generalized estimating equations. RESULTS: Prior to 2007, the median proportion of patients with monthly hemoglobin >12 g/dL for patients on dialysis 0 to 3, 4 to 6, and 7 to 18 months was 42%, 55%, and 46%, respectively; these proportions declined to 41%, 54%, and 40% after 2007, and declined more sharply in 2010 to 34%, 41%, and 30%. Median weekly Epoetin alfa doses for the same groups were 18,000, 12,400, and 9,100 units before 2007; remained relatively unchanged from 2007 to 2009; and decreased sharply in 2010, to 10,200 and 7,800 units in patients 3 to 6 and 6 to 18 months on dialysis, respectively. Iron doses, serum ferritin, and transferrin saturation levels increased over time, with more pronounced increases in 2010. CONCLUSION: Modest changes in anemia management occurred between 2007 and 2009, followed by more dramatic changes in 2010. Studies are needed to examine the effects of declining erythropoietin use and hemoglobin levels and increasing intravenous iron use on quality of life, transplantation rates, infection rates, and survival.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND: Several observational studies have evaluated the effect of a single exposure window with blood pressure (BP) medications on outcomes in incident dialysis patients, but whether BP medication prescription patterns remain stable or a single exposure window design is adequate to evaluate effect on outcomes is unclear. METHODS: We described patterns of BP medication prescription over 6 months after dialysis initiation in hemodialysis and peritoneal dialysis patients, stratified by cardiovascular comorbidity, diabetes, and other patient characteristics. The cohort included 13,072 adult patients (12,159 hemodialysis, 913 peritoneal dialysis) who initiated dialysis in Dialysis Clinic, Inc., facilities January 1, 2003-June 30, 2008, and remained on the original modality for at least 6 months. We evaluated monthly patterns in BP medication prescription over 6 months and at 12 and 24 months after initiation. RESULTS: Prescription patterns varied by dialysis modality over the first 6 months; substantial proportions of patients with prescriptions for beta-blockers, renin angiotensin system agents, and dihydropyridine calcium channel blockers in month 6 no longer had prescriptions for these medications by month 24. Prescription of specific medication classes varied by comorbidity, race/ethnicity, and age, but little by sex. The mean number of medications was 2.5 at month 6 in hemodialysis and peritoneal dialysis cohorts. CONCLUSIONS: This study evaluates BP medication patterns in both hemodialysis and peritoneal dialysis patients over the first 6 months of dialysis. Our findings highlight the challenges of assessing comparative effectiveness of a single BP medication class in dialysis patients. Longitudinal designs should be used to account for changes in BP medication management over time, and designs that incorporate common combinations should be considered.

Relevance: 60.00%

Publisher:

Abstract:

The role of antibodies in chronic injury to organ transplants has been suggested for many years but has recently been emphasized by new data. We have observed that when immunosuppressive potency decreases, either by intentional weaning of maintenance agents or due to homeostatic repopulation after immune cell depletion, the threshold of B cell activation may be lowered. In human transplant recipients the result may be donor-specific antibody (DSA), C4d+ injury, and chronic rejection. This scenario has precise parallels in a rhesus monkey renal allograft model in which T cells are depleted with CD3 immunotoxin, and in a CD52-T cell transgenic mouse model using alemtuzumab to deplete T cells. Such animal models may be useful for testing therapeutic strategies to prevent DSA. We agree with others who suggest that weaning of immunosuppression may place transplant recipients at risk of chronic antibody-mediated rejection, and that strategies to prevent this scenario are needed if we are to improve long-term graft and patient outcomes in transplantation. We believe that animal models will play a crucial role in defining the pathophysiology of antibody-mediated rejection and in developing effective therapies to prevent graft injury. Two such animal models are described herein.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND: Hip adductor, tensor fascia lata, and rectus femoris muscle contractures following total hip arthroplasty are quite common, and some patients fail to improve despite treatment with a variety of non-operative modalities. The purpose of the present study was to describe the use and patient outcomes of botulinum toxin injections as an adjunctive treatment for muscle tightness following total hip arthroplasty. METHODS: Ten patients (14 hips) who had hip adductor, abductor, and/or flexor muscle contractures following total hip arthroplasty and had been refractory to physical therapy were treated with injection of botulinum toxin A. Eight limbs received injections into the adductor muscle, eight limbs into the tensor fascia lata muscle, and two limbs into the rectus femoris muscle, followed by intensive physical therapy for 6 weeks. RESULTS: At a mean final follow-up of 20 months, all 14 hips had increased range in the affected arc of motion, with a mean improvement of 23 degrees (range, 10 to 45 degrees). Additionally, all hips had improved hip scores, with a significant increase in mean score from 74 points (range, 57 to 91 points) before injection to 96 points (range, 93 to 98 points) at final follow-up. There were no serious treatment-related adverse events. CONCLUSION: Botulinum toxin A injections combined with intensive physical therapy may be considered a potential treatment modality, especially in difficult cases of muscle tightness refractory to standard therapy.

Relevance: 60.00%

Publisher:

Abstract:

© 2014, The International Biometric Society. A potential avenue to improve healthcare efficiency is to effectively tailor individualized treatment strategies by incorporating patient-level predictor information such as environmental exposure, biological, and genetic marker measurements. Many useful statistical methods for deriving individualized treatment rules (ITRs) have become available in recent years. Prior to adopting any ITR in clinical practice, it is crucial to evaluate its value in improving patient outcomes. Existing methods for quantifying such value mainly consider either a single marker or semi-parametric methods that are subject to bias under model misspecification. In this article, we consider a general setting with multiple markers and propose a two-step robust method to derive ITRs and evaluate their values. We also propose procedures for comparing different ITRs, which can be used to quantify the incremental value of new markers in improving treatment selection. While working models are used in step I to approximate optimal ITRs, we add a layer of calibration to guard against model misspecification and further assess the value of the ITR non-parametrically, which ensures the validity of the inference. To account for the sampling variability of the estimated rules and their corresponding values, we propose a resampling procedure to provide valid confidence intervals for the value functions as well as for the incremental value of new markers for treatment selection. Our proposals are examined through extensive simulation studies and illustrated with data from a clinical trial that studied the effects of two drug combinations on HIV-1 infected patients.
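The core objects in this abstract, the value of a treatment rule and a resampling confidence interval for it, can be sketched with a toy example. This is a generic inverse-probability value estimator with a percentile bootstrap under an assumed 1:1 randomized trial, not the authors' calibrated two-step procedure; all names and data are hypothetical:

```python
import random

def rule_value(data, rule, p_treat=0.5):
    """Inverse-probability estimate of the mean outcome if all patients
    followed `rule`: sum Y / P(A = rule(X)) over patients whose randomized
    arm agrees with the rule, divided by the full sample size."""
    total = 0.0
    for x, a, y in data:
        if a == rule(x):
            total += y / (p_treat if a == 1 else 1.0 - p_treat)
    return total / len(data)

def bootstrap_ci(data, rule, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for the value of `rule`."""
    rng = random.Random(seed)
    vals = sorted(rule_value([rng.choice(data) for _ in data], rule)
                  for _ in range(n_boot))
    return vals[int(alpha / 2 * n_boot)], vals[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical trial records (marker x, randomized arm a, outcome y):
# outcomes are more likely favorable when the arm matches the marker rule.
rng = random.Random(1)
data = []
for _ in range(200):
    x, a = rng.random(), rng.randrange(2)
    follows = (a == 1) == (x > 0.5)
    y = 1.0 if rng.random() < (0.7 if follows else 0.3) else 0.0
    data.append((x, a, y))

def marker_rule(x):
    return 1 if x > 0.5 else 0   # treat when the marker is high

lo, hi = bootstrap_ci(data, marker_rule)
print(f"estimated value 95% CI: [{lo:.2f}, {hi:.2f}]")
```

Comparing such intervals for rules built with and without a new marker is one simple way to express the "incremental value" idea the abstract describes, though the paper's own inference adds calibration steps not shown here.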

Relevance: 60.00%

Publisher:

Abstract:

The economic rationale for public intervention into private markets through price mechanisms is twofold: to correct market failures and to redistribute resources. Financial incentives are one such price mechanism. In this dissertation, I specifically address the role of financial incentives in providing social goods in two separate contexts: a redistributive policy that enables low-income working families to access affordable childcare in the US, and an experimental pay-for-performance intervention to improve population health outcomes in rural India. In the first two papers, I investigate the effects of government incentives for providing grandchild care on grandmothers' short- and long-term outcomes. In the third paper, coauthored with Manoj Mohanan, Grant Miller, Katherine Donato, and Marcos Vera-Hernandez, we use an experimental framework to consider the effects of financial incentives in improving maternal and child health outcomes in the Indian state of Karnataka.

Grandmothers provide a significant amount of childcare in the US, but little is known about how this informal, and often uncompensated, time transfer impacts their economic and health outcomes. The first two chapters of this dissertation address the impact of federally funded, state-level means-tested programs that compensate grandparent-provided childcare on the retirement security of older women, an economically vulnerable group of considerable policy interest. I use the variation in the availability and generosity of childcare subsidies to model the effect of government payments for grandchild care on grandmothers’ time use, income, earnings, interfamily transfers, and health outcomes. After establishing that more generous government payments induce grandmothers to provide more hours of childcare, I find that grandmothers adjust their behavior by reducing their formal labor supply and earnings. Grandmothers make up for lost earnings by claiming Social Security earlier, increasing their reliance on Supplemental Security Income (SSI) and reducing financial transfers to their children. While the policy does not appear to negatively impact grandmothers’ immediate economic well-being, there are significant costs to the state, in terms of both up-front costs for care payments and long-term costs as a result of grandmothers’ increased reliance on social insurance.

The final paper, The Role of Non-Cognitive Traits in Response to Financial Incentives: Evidence from a Randomized Control Trial of Obstetrics Care Providers in India, is coauthored with Manoj Mohanan, Grant Miller, Katherine Donato and Marcos Vera-Hernandez. We report the results from “Improving Maternal and Child Health in India: Evaluating Demand and Supply Side Strategies” (IMACHINE), a randomized controlled experiment designed to test the effectiveness of supply-side incentives for private obstetrics care providers in rural Karnataka, India. In particular, the experimental design compares two different types of incentives: (1) those based on the quality of inputs providers offer their patients (inputs contracts) and (2) those based on the reduction of incidence of four adverse maternal and neonatal health outcomes (outcomes contracts). Along with studying the relative effectiveness of the different financial incentives, we also investigate the role of provider characteristics, preferences, expectations and non-cognitive traits in mitigating the effects of incentive contracts.

We find that both contract types reduce rates of post-partum hemorrhage, the leading cause of maternal mortality in India, by about 20%. We also find some evidence of multitasking, as providers with output incentive contracts reduce the level of postnatal newborn care received by their patients. We find that patient health improvements in response to both contract types are concentrated among more highly trained providers, while improvements in patient care are concentrated among less trained providers. Contrary to our expectations, we also find improvements in patient health to be concentrated among the most risk-averse providers, while more patient providers respond relatively little to the incentives; these differences are most evident in the outputs contract arm. The results are opposite for patient care outcomes: risk-averse providers have significantly lower rates of patient care, and more patient providers provide higher-quality care in response to the outputs contract. We find evidence that providers' overconfidence about possible improvements reduces the effectiveness of both types of incentive contracts for improving both patient outcomes and patient care. Finally, we find no heterogeneous response based on non-cognitive traits.

Relevance: 60.00%

Publisher:

Abstract:

© 2014, Canadian Anesthesiologists' Society. Optimal perioperative fluid management is an important component of Enhanced Recovery After Surgery (ERAS) pathways. Fluid management within ERAS should be viewed as a continuum through the preoperative, intraoperative, and postoperative phases. Each phase is important for improving patient outcomes, and suboptimal care in one phase can undermine best practice within the rest of the ERAS pathway. The goal of preoperative fluid management is for the patient to arrive in the operating room in a hydrated and euvolemic state. To achieve this, prolonged fasting is not recommended, and routine mechanical bowel preparation should be avoided. Patients should be encouraged to ingest a clear carbohydrate drink two to three hours before surgery. The goals of intraoperative fluid management are to maintain central euvolemia and to avoid excess salt and water. To achieve this, patients undergoing surgery within an enhanced recovery protocol should have an individualized fluid management plan. As part of this plan, excess crystalloid should be avoided in all patients. For low-risk patients undergoing low-risk surgery, a "zero-balance" approach might be sufficient. In addition, for most patients undergoing major surgery, individualized goal-directed fluid therapy (GDFT) is recommended. Ultimately, however, the additional benefit of GDFT should be determined based on surgical and patient risk factors. Postoperatively, once fluid intake is established, intravenous fluid administration can be discontinued and restarted only if clinically indicated. In the absence of other concerns, detrimental postoperative fluid overload is not justified and "permissive oliguria" could be tolerated.

Relevance: 60.00%

Publisher:

Abstract:

Copyright © 2014 International Anesthesia Research Society. BACKGROUND: Goal-directed fluid therapy (GDFT) is associated with improved outcomes after surgery. The esophageal Doppler monitor (EDM) is widely used, but has several limitations. The NICOM, a completely noninvasive cardiac output monitor (Cheetah Medical), may be appropriate for guiding GDFT. No prospective studies have compared the NICOM and the EDM. We hypothesized that the NICOM is not significantly different from the EDM for monitoring during GDFT. METHODS: One hundred adult patients undergoing elective colorectal surgery participated in this study. Patients in phase I (n = 50) had intraoperative GDFT guided by the EDM while the NICOM was connected, and patients in phase II (n = 50) had intraoperative GDFT guided by the NICOM while the EDM was connected. Each patient's stroke volume was optimized using 250-mL colloid boluses. Agreement between the monitors was assessed, and patient outcomes (postoperative pain, nausea, and return of bowel function), complications (renal, pulmonary, infectious, and wound complications), and length of hospital stay (LOS) were compared. RESULTS: Using a 10% increase in stroke volume after fluid challenge, agreement between monitors was 60% at 5 minutes, 61% at 10 minutes, and 66% at 15 minutes, with no significant systematic disagreement (McNemar P > 0.05) at any time point. The EDM had significantly more missing data than the NICOM. No clinically significant differences were found in total LOS or other outcomes. The mean LOS was 6.56 ± 4.32 days in phase I and 6.07 ± 2.85 days in phase II, and 95% confidence limits for the difference were -0.96 to +1.95 days (P = 0.5016). CONCLUSIONS: The NICOM performs similarly to the EDM in guiding GDFT, with no clinically significant differences in outcomes, and offers increased ease of use as well as fewer missing data points. The NICOM may be a viable alternative monitor to guide GDFT.
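The agreement analysis described here, paired responder calls on the same fluid challenges, raw agreement, and a McNemar test on the discordant pairs, can be sketched with a toy computation. The paired calls below are invented for illustration, not the study's data:

```python
import math

def mcnemar_agreement(paired):
    """Raw agreement plus McNemar test (chi-square form, no continuity
    correction) on paired binary responder calls from two monitors."""
    n = len(paired)
    agree = sum(1 for a, b in paired if a == b) / n
    disc_ab = sum(1 for a, b in paired if a and not b)   # monitor A only
    disc_ba = sum(1 for a, b in paired if b and not a)   # monitor B only
    if disc_ab + disc_ba == 0:
        return agree, 1.0                                # no discordant pairs
    chi2 = (disc_ab - disc_ba) ** 2 / (disc_ab + disc_ba)
    p = math.erfc(math.sqrt(chi2 / 2))                   # chi-square tail, 1 df
    return agree, p

# Invented paired calls: did each monitor register a >10% stroke-volume
# increase after the same fluid bolus? (1 = responder, 0 = non-responder)
calls = [(1, 1)] * 25 + [(0, 0)] * 5 + [(1, 0)] * 11 + [(0, 1)] * 9
agree, p = mcnemar_agreement(calls)
print(f"agreement {agree:.0%}, McNemar p = {p:.2f}")
```

As in the study, a McNemar P above 0.05 indicates no systematic tendency for one monitor to call responders more often than the other, even when raw agreement is only moderate.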

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Outcome assessment can support the therapeutic process by providing a way to track symptoms and functionality over time, providing insights to clinicians and patients, as well as offering a common language to discuss patient behavior/functioning. OBJECTIVES: In this article, we examine the patient-based outcome assessment (PBOA) instruments that have been used to determine outcomes in acupuncture clinical research and highlight measures that are feasible, practical, economical, reliable, valid, and responsive to clinical change. The aims of this review were to assess and identify the commonly available PBOA measures, describe a framework for identifying appropriate sets of measures, and address the challenges associated with these measures and acupuncture. Instruments were evaluated in terms of feasibility, practicality, economy, reliability, validity, and responsiveness to clinical change. METHODS: This study was a systematic review. A total of 582 abstracts were reviewed using PubMed (from inception through April 2009). RESULTS: A total of 582 citations were identified. After screening of title/abstract, 212 articles were excluded. From the remaining 370 citations, 258 manuscripts identified explicit PBOA; 112 abstracts did not include any PBOA. The five most common PBOA instruments identified were the Visual Analog Scale, Symptom Diary, Numerical Pain Rating Scales, SF-36, and depression scales such as the Beck Depression Inventory. CONCLUSIONS: The way a questionnaire or scale is administered can have an effect on the outcome. Also, developing and validating outcome measures can be costly and difficult. Therefore, reviewing the literature on existing measures before creating or modifying PBOA instruments can significantly reduce the burden of developing a new measure.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Ipsilateral hindfoot arthrodesis in combination with total ankle replacement (TAR) may diminish functional outcome and prosthesis survivorship compared to isolated TAR. We compared the outcome of isolated TAR to outcomes of TAR with ipsilateral hindfoot arthrodesis. METHODS: In a consecutive series of 404 primary TARs in 396 patients, 70 patients (17.3%) had a hindfoot fusion before, after, or at the time of TAR; the majority had either an isolated subtalar arthrodesis (n = 43, 62%) or triple arthrodesis (n = 15, 21%). The remaining 334 isolated TARs served as the control group. Mean patient follow-up was 3.2 years (range, 24-72 months). RESULTS: The SF-36 total, AOFAS Hindfoot-Ankle pain subscale, Foot and Ankle Disability Index, and Short Musculoskeletal Function Assessment scores were significantly improved from preoperative measures, with no significant differences between the hindfoot arthrodesis and control groups. The AOFAS Hindfoot-Ankle total, function, and alignment scores were significantly improved for both groups, although the control group demonstrated significantly higher scores on all three scales. Furthermore, the control group demonstrated significantly greater improvement in VAS pain score than the hindfoot arthrodesis group. Walking speed, sit-to-stand time, and 4-square step test time were significantly improved for both groups at each postoperative time point; however, the hindfoot arthrodesis group completed these tests significantly more slowly than the control group. There was no significant difference in talar component subsidence between the fusion (2.6 mm) and control (2.0 mm) groups. The failure rate in the hindfoot fusion group (10.0%) was significantly higher than that in the control group (2.4%; p < 0.05). CONCLUSION: To our knowledge, this study represents the first series evaluating the clinical outcome of TARs performed with and without hindfoot fusion using implants available in the United States. At a mean follow-up of 3.2 years, TAR performed with ipsilateral hindfoot arthrodesis resulted in significant improvements in pain and functional outcome; in contrast to prior studies, however, overall outcome was inferior to that of isolated TAR. LEVEL OF EVIDENCE: Level II, prospective comparative series.

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: To ascertain the degree of variation, by state of hospitalization, in outcomes associated with traumatic brain injury (TBI) in a pediatric population. DESIGN: A retrospective cohort study of pediatric patients admitted to a hospital with a TBI. SETTING: Hospitals from states in the United States that voluntarily participate in the Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project. PARTICIPANTS: Pediatric (age ≤ 19 y) patients hospitalized for TBI (N=71,476) in the United States during 2001, 2004, 2007, and 2010. INTERVENTIONS: None. MAIN OUTCOME MEASURES: The primary outcome was the proportion of patients discharged to rehabilitation after an acute care hospitalization among alive discharges. The secondary outcome was inpatient mortality. RESULTS: The relative risk of discharge to inpatient rehabilitation varied by as much as 3-fold among the states, and the relative risk of inpatient mortality varied by as much as nearly 2-fold. In the United States, approximately 1,981 additional patients could be discharged to inpatient rehabilitation care if the observed variation in outcomes were eliminated. CONCLUSIONS: There was significant variation between states in both rehabilitation discharge and inpatient mortality after adjusting for variables known to affect each outcome. Future efforts should focus on identifying the cause of this state-to-state variation and its relationship to patient outcomes, and on standardizing treatment across the United States.